
It’s a familiar sensation: You start a text message and your phone’s autocomplete feature suggests several options for the next word, from banal to hilarious. “I love…” you? Or the coffee? Or you’re finishing an email, and just typing the word “Let” prompts your app to suggest “Let me know if you have any questions” in light gray text.
Predictive language technologies have become so routine—embedded in smartphones, email services, and chatbots—that we hardly notice them anymore. But they raise a difficult question: What happens to a writer’s unique voice when AI routinely completes their thoughts or generates them from scratch?
As the chair of a large English department (and as an academic researching the effects of predictive writing), I have witnessed firsthand the challenges that generative AI systems like ChatGPT, Gemini, and Claude pose to individual expression.
This technology has become so fully incorporated into the writing process that it is almost impossible to imagine a scene from the not-so-distant past: a writer, alone with a pen and a sheet of paper, struggling to find the best way to turn their ideas, arguments and stories into something readable and interesting.
Predictive text leads to predictive writing
However, as many scholars have pointed out, this view of writing was never entirely accurate.
Essays have always incorporated guidance from professors or writing tutors. A friend might give you feedback, or a quote from your favorite novelist might offer inspiration. The language we use is never completely “ours,” but is drawn from millions of sources absorbed throughout our lives.
Just as it is a myth to imagine that writers compose in a vacuum, there has never been a clear line between genuine human expression and machine-generated text. As scholars have pointed out, we have long been using machines to communicate. Every technological advance (from the pen and typewriter to the word processor) has brought about changes in the way humans express themselves.
Yet the ubiquity of predictive language technologies poses a direct threat to human creativity. Or, as one study put it, “Predictive text encourages predictive writing.”
Because generative AI composes and suggests text in predictable, highly standardized patterns, its output can read like a disguised version of what linguists call “phatic expression”: commonplace phrases that function as social glue rather than as conveyors of genuine feeling, such as “How are you?”, “Have a good day” or “See you soon.”
But this glue can lose its strength if the technology is used in the wrong situations. Using artificial intelligence to compose a social media post in the wake of a tragedy, or to write a fan letter to an Olympic athlete, seems disingenuous.
People are starting to recognize generative AI prose, not because it’s clunky or poorly written, but because it all sounds the same. That’s because large language models are trained on enormous corpora of human writing and predict text based on probabilities and commonalities.
Those predictions often converge on a single, recognizable voice. As Sam Kriss put it in a recent essay for The New York Times Magazine: “There used to be many writers and many different styles. Now, increasingly, one uncredited author writes essentially everything.”
Converging on a cultural middle
Generative AI is accelerating the types of cultural convergence and uniform expression that were already occurring.
For example, linguists have shown that regional accents in the United States are fading and homogenizing due to a combination of migration, urbanization, mass media, and social networks. Meanwhile, American English continues to supplant many other forms internationally due to the global dominance of US-based media, television, film, and more.
Are we all destined to write and speak alike? Generative AI doesn’t know in advance whether you call soft drinks “soda,” “pop” or “coke.” If you let it choose, it will simply pick “soda” for you, since that is the most common term in its training data.
By contrast, what people often value in a personal essay, a novel, a poem, or a message to a grieving friend is the human author’s ability to demonstrate, clearly and distinctively, something powerful and unique.
Make chatbots less attractive
So how can teachers help students develop their own voices? And how is that task different today than it was a decade ago?
It’s helpful to think here about where generative AI struggles and why.
Chatbots are great at producing relatively bland, highly readable prose, since that’s what is ubiquitous in their training data. But they struggle to make the kind of radically unexpected moves that appear in novels like James Joyce’s Ulysses or songs like Queen’s “Bohemian Rhapsody.”
There are several techniques to encourage these types of stylistic leaps among student writers.
Teachers can build unpredictability into assignments. Creative writing teachers have used such techniques for decades to spur innovative thinking: They might ask students to write a poem and then rewrite it while avoiding the letter “E,” or while using no more than two adjectives.
Another tactic is to have students draw on distinctly personal experiences. Teaching students to explore the connections between the characters and conflicts in a novel and the people and situations in their own lives makes turning to chatbots less appealing, if not useless. By contrast, an impersonal prompt such as “Discuss the symbolism of the color green in The Great Gatsby” will likely produce generic, predictable results.
Teachers can also ensure that their students’ work has a variety of readers. If it’s just the teacher, students are less likely to invest time in cultivating their own voice. But if they have to write an essay or a story for, say, their friends or grandparents, they may have more incentive to speak as themselves.
There are many other strategies, from having students reverse an essay’s argument to defend the opposite side, to sending them out to interview strangers and incorporate their quotes into an assignment.
The bottom line: Writers have access to sources (and language) that machines cannot access or generate. Having students wrestle with unconventional modes of composing and revising is critical to ensuring that the technology remains a useful thinking partner rather than a substitute for their voice.
Gayle Rogers is a professor of English at the University of Pittsburgh.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

