
AI and the Everyday Writer

Published online by Cambridge University Press:  08 October 2024


Type: Theories and Methodologies

Copyright © 2024 The Author(s). Published by Cambridge University Press on behalf of the Modern Language Association of America

Writing has been a building material and a binding material, an adhesive of trust that stabilizes institutions through its documentary and communicative affordances. Writing norms have crystallized around the assumption of human authorship—providing a “constant . . . symbolic ground, made up of human-constructed sign systems,” as Matthew Kirschenbaum and Rita Raley put it. But large language models (LLMs) represent a new mediation—one with the capacity to automate social infrastructures held together through text. Their affordances invite speculation about their consequences: with their ability to rewrite sentence fragments into edited prose, they promise to reduce the scarcity of standard written English; with their ability to produce customized writing at scale, they make possible new modes of written propaganda; with their capacity to automate routine writing tasks, they hold out freedom from drudge work; and with their ability to simulate human communication, they portend new crises of social trust. As Kirschenbaum and Raley observe, LLMs have “radically transformed this technolinguistic situation.”

We emphasize, however, that the transformative potential of LLMs is realized only through implementation. LLMs will likely initiate “structural transformations in language practices,” but such transformations are now surfacing in disparate ways as these new technologies meld with preexisting, embodied, and stubborn writing practices that are deeply entrenched in complex systems of bureaucracy, legal regulation, labor, and power. Backed by AI discourses that promise revolutionary efficiency—discourses steeped in the language of utopia, dystopia, speed, and inevitability—LLMs threaten to arrive by force. Yet they must still meet the real, powerful, and sometimes highly mundane constraints of everyday writing. We argue that theories of language and AI must account for the activity of uptake and implementation on the ground, which, at least in the near future, will be messy, incomplete, uneven, chaotic, and perhaps even boring.

To chart these incomplete implementations in progress, we have been interviewing everyday writers who have integrated LLMs into their composition practices (n = 23, so far). The study is framed around these questions: How do writers implement, cope with, and collaborate with these new and potentially invasive writing technologies? To what extent do writers distinguish between their writing and the contributions of the AI system? How do they see their writing processes changing, and what futures do they see for writing? By everyday writers, we mean people who spend significant portions of their day writing for work and civic activity (and sometimes pleasure) but who do not define themselves as writers or work in careers defined by writing, such as journalism and creative writing. As Deborah Brandt argues in The Rise of Writing, the information economy's insatiable demand for symbol manipulation—“knowledge work”—has forced many workers to reorient their labor around the production of prose. These workers write for much of the workday, and the skills and competencies they develop at work often support other parts of their lives. Yet they do not define themselves as writers, both because of the particular “stronghold” of the literary writer in the popular imagination (Brandt, Rise 97), an identity to which they are often averse, and because they engage with writing instrumentally (although not simplistically), as a means to an end. We turned to these everyday writers because they tend to be more committed to their goals than to the writing that helps them achieve those goals, and they have little allegiance to traditions that might define the work of the writer. In short, we expected them to be early adopters of a technology that promises writing efficiency.

Our focus on the textual production that facilitates information capitalism may seem far afield from this journal's usual focus on cultures of literary production and critique; however, these forms of writing share multiple points of entanglement. The writing economies of the contemporary workplace, civic contexts, and higher education are interdependent, and their associated writing practices share a semipermeable boundary. High demand for workplace writing fuels demand for required college courses in the language arts. In turn, the teaching labor required for introductory language arts courses funds graduate assistantships in English and preserves faculty lines across the department. Given the precarity of so many workers with advanced degrees in the language arts, freelance work in the information economy subsidizes the production of contemporary creative writing and literary scholarship. That work is now under threat of automation. Kirschenbaum and Raley note that scholarly and pedagogical practices are rapidly shifting in response to LLMs, and our profession is equipped to process that change. One professional response, the MLA-CCCC Joint Task Force on Writing and AI, has focused on how language models will shape the student experience and the economics of higher education as they relate to the language arts. However, we also want to know how the language arts shape larger economies of writing. Consequently, we're turning the tools of our profession on everyday writers, millions of whom have absorbed—to some extent—the values we profess about language and writing as they pass through our required and elective courses in composition, language, and literature.

In what follows, we offer brief examples from our data that demonstrate the mundane complexity of AI implementation, specifically as it relates to our participants’ use of LLMs and to their own writing voice. The concept of voice has long vexed subfields of the language arts, from composition and rhetoric to literary studies and creative writing. In our journals, books, and classes, we've asked how voice is established and developed, how it is differentiated and influenced, what linguistic features define it, whether it is a property of the text or an effect manufactured in reader response, how narrative techniques and theories trouble the concept, and how it relates to identity and to notions of agency and subjectivity. Most contemporary theory, influenced by postmodernism, has drifted away from the idea that an “authentic voice” can issue from a coherent human subject; in the contexts we're studying, however, voice persists as a stable and durable concept that acts as a point of resistance against AI and plays a structuring role in rhetorical decision-making as writers collaborate with LLMs.

That ideas about voice persist among writers working with LLMs is perhaps not surprising, because voice has circulated frequently as a keyword in discourse around AI. ChatGPT version 3.5 has been one of the most commonly used LLMs, since it is the version offered for free through OpenAI's website. Widespread social experimentation with ChatGPT led to critiques on social media and in the popular press that its writing had a disembodied “robovoice”; in some examples, attempts to make it mimic different kinds of voices and dialects led ChatGPT to spew problematic textual caricatures of minoritized discourses. Indeed, some of the most popular applications meant to modify the outputs of LLMs have purported to transform AI-generated text (by ChatGPT and a host of other LLMs) into a more “human”-sounding writing voice—and thereby help users circumvent AI detectors.

For many of the writers in our study, voice exists as a metaphor framing a bundle of concerns related to AI text generation and machine-assisted composition. It helps writers generate heuristics of value as they decide what practices can be—or should be—off-loaded to a machine. Alan M. Knowles distinguishes between human-in-the-loop (HITL) writing and machine-in-the-loop (MITL) writing: in HITL writing, AI generates most of the text, subject to human oversight; in MITL writing, humans retain the “majority of the rhetorical load,” and AI tools support the writing process. This question of who or what carries the “rhetorical load” of a piece of writing helps frame the important role our participants assigned to voice: it is a vector that elevates the status of MITL writing over HITL writing, but, crucially, that value is also pegged to context-sensitive anxieties surrounding the use of AI.

We interviewed two academic administrators, Chris Hargrove and Mario Delgado,¹ both of whom worked at midsize comprehensive universities in the western United States. Although both Hargrove and Delgado considered themselves authors because of their published academic research, it was their workaday writing that they subjected to AI and that we focused on in interviews. They enthusiastically used ChatGPT for the “slog” of bureaucratic work such as references, nominations, and evaluations. These documents were either faits accomplis or based on original materials that the administrators had reviewed themselves and judged accordingly, only then using ChatGPT to write up formal notes. Before using ChatGPT, they often drafted new versions from templates or from examples of similar documents they'd written. Now they honed their prompts and asked ChatGPT to emulate examples of their previous writing. Both administrators reviewed the documents carefully after ChatGPT wrote them—“I need to be able to sign my name to it,” Hargrove said, invoking a managerial concept of authorial responsibility (see Brandt, “‘Who's the President?’”). For both administrators, these ChatGPT-drafted documents “passed” as their own writing, often netting their faculty members the awards, honors, promotions, or renewals for which the documents nominated them. Their orientation to these genres suggested that their ethical obligation was to advocate for their faculty members’ careers—an obligation both saw as central to their work—but not necessarily to the bureaucratic forms that upper administration or granting agencies required to support that advocacy. It may be significant that these administrators were researchers as well as administrators and consequently had a strong, sophisticated relationship to authorship and responsibility.
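For readers who know LLMs only through chat interfaces, it may help to see what this few-shot “emulate my previous writing” move looks like in mechanical terms. The sketch below is a minimal illustration written against the OpenAI Python client; the model name, the sample letters, the notes, and the prompt wording are all our own hypothetical stand-ins, not our participants’ actual prompts or documents.

```python
# Minimal sketch (not our participants' actual practice) of prompting an
# LLM to emulate a writer's prior prose. Assumes the OpenAI Python client
# (openai>=1.0); all sample texts below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical examples of the writer's own previous letters, supplied
# as style models for the LLM to imitate.
previous_letters = [
    "It is my pleasure to nominate Dr. Reyes for the Distinguished ...",
    "I write in enthusiastic support of Professor Lin's promotion ...",
]

notes = "award: university teaching prize; candidate: J. Alvarez; ..."

prompt = (
    "Below are two recommendation letters I have written. Using my notes, "
    "draft a new nomination letter that matches my tone, diction, and "
    "sentence rhythm as closely as possible.\n\n"
    + "\n\n---\n\n".join(previous_letters)
    + f"\n\nNotes for the new letter:\n{notes}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # API analogue of the free ChatGPT 3.5 discussed above
    messages=[{"role": "user", "content": prompt}],
)

# The machine drafts; the human reviews and signs.
print(response.choices[0].message.content)
```

The point of the sketch is structural rather than technical: the writer supplies prior prose as a style model and retains the final act of review, the moment at which, as Hargrove put it, one must be able to sign one's name to the result.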

When writers talk about “passing,” or hiding machine-generated contributions, as many of our interviewees did, we begin to see some of the most obvious concerns about voice in contemporary workplace writing. That is to say: What value does a human voice have, and how does a human voice respond to demands for content and customization, especially at scale? Steve Winters, an online content creator who uses AI, still outsourced some of his work to gig-economy services (think Fiverr or Upwork) and noticed that many of the workers on those platforms now seemed to be using AI in their pitches. “I'm not even mad,” he noted, recognizing that he doesn't pay them much and that the work isn't that intellectually engaging. He preferred pitches tailored to his requests, though, over the “spraying and praying” type that tended to sound more generically produced by AI. When Winters uses AI in his writing, he generally mentions that to his client—although not to the eventual readers of the work. He cataloged a variety of writing genres in digital commerce that are now largely written by AI: email subject lines for abandoned digital shopping carts, search engine optimization (SEO) keywords, content for specific blogs. He described first learning how to write content that would perform well for SEO: search on a keyword, find the top blog post for that keyword, and then rewrite that post as your own, mimicking it but in your own voice. Like Hargrove and Delgado, he saw a precedent for current practices: using ChatGPT meant doing the same thing as SEO mimicking, but with AI.

But even in the midst of media flows where the primary goal of writing is less the rhetorical purpose of any single text and more the creation of multiple, customized content streams that channel consumption, Winters collaborated with AI only insofar as he was still present in the writing. He generally edited his prompts as well as ChatGPT's output to reflect his own voice—with a few exceptions. When writing subject lines for emails about abandoned shopping carts, for instance, he noted that ChatGPT was better than he was: it added emojis and nailed the genre. “And that's just not me,” he shrugged; in that case, he ceded his writing entirely to ChatGPT.

Winters's concerns about voice and quality appear in both the production of writing and its consumption. As our conversations with the academic administrators and with Winters demonstrate, there was a widespread perception among our interviewees that the default voice of ChatGPT did not match their own. Some participants were confident that they could identify that voice because it sounded robotic to them. Dwayne Curtis, for example, worked in human resources on diversity initiatives for large technology companies and used AI for a variety of workplace writing: market analyses, policy and procedure writing, and internal communications. Curtis's time dabbling in creative writing led him to think about voice, and he spent significant time reshaping ChatGPT's output to infuse it with his own voice, both for the sake of authenticity and to conceal his use of AI. As with the academic administrators, we see in Curtis an anxiety about alienating coworkers and an eagerness to avoid obviously automated text, which led him to edit AI-generated drafts to maintain his personal voice and style. Kirschenbaum describes a similar phenomenon among authors who were early adopters of word processing: they doctored fonts, rumpled pages, and added annotations to disguise the fingerprints of the computer in the composition (37).

Rob Hartson's experience hiring a personal assistant represents a twist on Curtis's. Hartson works in the film industry and found ChatGPT so helpful that he evangelized it to his colleagues. However, when faced with a stack of applications for a personal assistant, Hartson eliminated candidates who appeared to have used ChatGPT for all their materials. He claimed to be able to identify the robotic nature of the voice. Like Winters the content creator, Hartson advocated for ChatGPT but discounted the value of texts produced by writers who appeared to rely too heavily on it. Along with the other writers in our study, Hartson placed a higher value on MITL writing—that is, writing that expresses human voice and judgment, even if it includes some machinic collaboration. But if this valuation of human voice comes to dominate writing assessment in the workplace (as it appears to be dominating writing assessment in higher education), it will unfold against emerging evidence suggesting not only that people overestimate others’ use of LLMs (Purcell et al.) but also that their ability to identify AI-generated text is not particularly good (Clark et al.; Gao et al.). Hartson's case gestures toward an emerging dynamic in which synthetic text exacerbates problems of social trust.

Four years ago in PMLA, Wai Chee Dimock worried about the existential threat of automation across writing, self-driving cars, and other applications of AI. We wouldn't dismiss AI's existential risk to the language arts in higher education, but in the near future AI will likely entangle itself within existing writing practices rather than overtake them. AI could eventually shatter contemporary systems of writing, but right now, on the ground, we are seeing resistance to AI writing along a number of axes, one of which is voice. People who write for work are preserving their investment in human voice and authorship. The “robovoice” of ChatGPT 3.5 has arguably been tempered in subsequent models such as GPT-4, GPT-4o, and Anthropic's Claude series; moreover, future iterations of language models promise to replicate our writing voices by fine-tuning on our digital data. So this resistance could be ephemeral: voice may be a source of friction now, but that friction could migrate to other concerns as new features of the technologies are introduced. Alternatively, as we increase our sample size, we may discover different values of authorship at play. In her study of early adopters of AI on YouTube and TikTok, Stacey Pigg observed that student writers lacking disciplinary expertise eschewed their own authority when they worked with ChatGPT.

As generative AI moves from speculative possibilities to implementation among different writing populations, another dynamic to watch is its uptake by nonnative English speakers who must write in English for work. Nature's fall 2023 survey of postdoctoral researchers in STEM fields suggests that nonnative English speakers are at the vanguard of LLM uptake (Nordling). The survey found that almost one-third of postdoctoral researchers—a highly international population—reported that LLMs changed the way they write papers and that sixty-three percent of that group used LLMs for “refining text.” One postdoctoral researcher is quoted as saying the technology makes his research sound more “native.” The desire to be read as a native speaker—with access to a tool that facilitates it—relates to long histories of discrimination against nonnative speakers of English and to the requirement writers face to write in English to access higher education, publishing, business, government, and science internationally. Referencing this history of discrimination, Laura Gonzales suggests that the anxiety among educators about AI stems not from how students might use it but from who might use it: AI enables “multilingual speakers and writers to draft content that may ‘deceive’ teachers and administrators into thinking these writers are skilled at composing in Standardized White English.” Violeta Berdejo-Espinola and Tatsuya Amano argue that AI could support equity in the sciences by mitigating the tax on nonnative English speakers who must employ standard written English for publication.

This history of power and discrimination, then, forces us to consider some problems of asserting human voice as a marker of value: To what extent does privileging human voices over “robovoices” reinscribe the native-speaker hierarchy of value? Does the output of the most popular LLMs represent a new kind of linguistic imperialism, given the data they have been trained on? What semiotic reservoirs made possible by world Englishes are circumvented when LLMs transform accented English into what resembles standard American English (see Canagarajah)? Given the perceived and actual role that our required language arts courses play in allocating and adjudicating an “authentic” and “standardized” voice in written English, our profession will need to think carefully about how our evaluation of text—synthetic and hybrid—reinscribes hierarchies of linguistic value. In the meantime, our profession can leverage our analytical tools to understand what values persist and disappear during this radical transformation in writing.

Footnotes

1. We have used pseudonyms for the names of our interview subjects in this essay.

Works Cited

Berdejo-Espinola, Violeta, and Tatsuya Amano. “AI Tools Can Improve Equity in Science.” Science, vol. 379, no. 6636, 9 Mar. 2023, p. 991.
Brandt, Deborah. The Rise of Writing: Redefining Mass Literacy. Cambridge UP, 2014.
Brandt, Deborah. “‘Who's the President?’: Ghostwriting and Shifting Values in Literacy.” College English, vol. 69, no. 6, July 2007, pp. 549–71.
Canagarajah, A. Suresh. “The Place of World Englishes in Composition: Pluralization Continued.” College Composition and Communication, vol. 57, no. 4, June 2006, pp. 586–619.
Clark, Elizabeth, et al. “All That's ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text.” arXiv, 2021, arxiv.org/abs/2107.00061.
Dimock, Wai Chee. “AI and the Humanities.” PMLA, vol. 135, no. 3, May 2020, pp. 449–54.
Gao, Catherine A., et al. “Comparing Scientific Abstracts Generated by ChatGPT to Real Abstracts with Detectors and Blinded Human Reviewers.” NPJ Digital Medicine, vol. 6, no. 1, 2023, https://doi.org/10.1038/s41746-023-00819-6.
Gonzales, Laura. “Fostering Learning, Curiosity, and Community in the Age of Generative AI.” English Education, vol. 55, no. 3, Mar. 2023, pp. 214–16.
Kirschenbaum, Matthew. Track Changes: A Literary History of Word Processing. Harvard UP, 2016.
Knowles, Alan M. “Machine-in-the-Loop Writing: Optimizing the Rhetorical Load.” Computers and Composition, vol. 71, 2024, article no. 102826, https://doi.org/10.1016/j.compcom.2024.102826.
Nordling, Linda. “How ChatGPT Is Transforming the Postdoc Experience.” Nature, vol. 622, 2023, pp. 655–57.
Pigg, Stacey. “Research Writing with ChatGPT: A Descriptive Embodied Practice Framework.” Computers and Composition, vol. 71, 2024, article no. 102830, https://doi.org/10.1016/j.compcom.2024.102830.
Purcell, Zoe A., et al. “Fears about AI-Mediated Communication Are Grounded in Different Expectations for One's Own versus Others’ Use.” arXiv, 2023, arxiv.org/abs/2305.01670.