For the past several months, I’ve been delivering and refining a workshop for educators called “AI and the Teaching of Writing.” The purpose of the workshop is to give teachers of writing time and space to consider the short- and long-term impacts of generative AI on the work they do. Everyone brings a signature writing project, and we spend a few hours deconstructing the project (with the option to use generative AI for assistance), assessing the “vulnerability” of key tasks to inappropriate use, and considering both AI-resistant and AI-assisted ways the project could be updated or transformed. We pitch our redesigns to each other and articulate some key priorities for shifting from a reactive to a proactive approach to generative AI.
I wanted to share some of the core principles of that workshop, principles that have evolved as I’ve watched and learned from teachers designing responses to generative AI.
To be clear, I’m biased: I love to write, and writing has been an important part of my life. I was a middle and high school English teacher for a decade and watched hundreds of students learn through the challenge of trying to express themselves in writing. In two years of intensive work on generative AI, I have seen nothing to convince me that writing is an obsolete skill no longer worth learning. It is changing, but it still matters, and it’s worth fighting for. That’s where I’ll start.
Principle 1: Writing matters.
A wide body of research shows that learning how to write has cognitive, social-emotional, and civic benefits, primarily because the act of writing can be both rigorous and deeply personal. This 2007 report from the Carnegie Corporation is a succinct summary of research not only on the value of writing, but also on 11 elements of the writing process that contribute to that value. In looking over this report and any number of other sources, I’ve found that the process of composition is the part of writing that matters most for student learning, more than the kind of writing product students are asked to create.
This emphasis on process is important. The Carnegie report quotes the National Commission on Writing: “If students are to make knowledge their own, they must struggle with the details, wrestle with the facts, and rework raw information and dimly understood concepts into language they can communicate to someone else. In short, if students are to learn, they must write.” I don’t know about that last part… writing is certainly not the only medium through which students can do this kind of work. But the research shows that if we want writing to have value for student learning, we should be guiding students through (and offering them feedback on) a collaborative process in which they actively take what they know and shape it into a form another person can understand.
Principle 2: The way we write is changing.
Sarah Elaine Eaton has been studying the impact of artificial intelligence on academic integrity for years, and she argues that the emergence of generative AI is pushing us into a “post-plagiarism era” where a hybrid writing partnership between humans and AI will become normalized. This partnership is going to change not just how we write, but how we define writing.
This is not to say that all writing will involve generative AI; it’s to say that hybrid writing will become a widespread and accepted category of writing. We’re already seeing indicators of this in the world: the “canvas” features in ChatGPT and Gemini that encourage hybrid writing, the emergence of “vibecoding” and “vibeworking,” Anthropic and OpenAI targeting college students, and the Shopify CEO’s viral memo declaring that AI use is now a baseline expectation of all employees.
The question, of course, is whether these changes to writing in the world should be replicated or even encouraged in school. The argument that we should teach students how to use AI because they are going to use it in the workplace is not compelling to me: we have no idea what AI is going to look like by the time even our oldest students enter the workplace, so what are we supposed to teach them? However, I think we should be engaging students on AI because of what’s happening now, not because of what might happen.
People are using AI. In my work, students and educators tell me stories of how they use tools like NotebookLM, voice mode, chatbots, and research assistants at every stage of the writing process, from prewriting to polishing. They are training AI bots on previous examples of their work, uploading notes and other artifacts to tailor the content of their writing, and using AI-generated feedback to make revisions. And the majority of this activity is unseen: people have revised their work too deeply for it to be easily classified or flagged as “plagiarism”; the work is too good to be labeled “AI slop”; they are too engaged in the process to call what they’re doing “cheating” or “cognitive offloading.” Yet the culture of AI at many schools is such that people using AI in these ways are not interested in sharing their activity because they fear being punished or shamed.
Hybrid writing is happening in school in ways we do not yet fully understand. What is not happening is open discussion and learning about what that writing should look like and how we would know whether it’s helpful. Who is teaching people to “struggle with the details, wrestle with the facts, and rework raw information” even if they’re using AI as an assistant? How are we acknowledging that the act of writing is changing while advocating for its core value?
Principle 3: Doing nothing is the riskiest choice.
A few people have sent me Scott McLeod’s “AI Vicious Cycle” image, which is just the latest articulation of a fear I’ve been hearing at schools since ChatGPT arrived.
[Image: Scott McLeod’s “AI Vicious Cycle”]
Unfortunately, this represents a plausible future we face in education, especially in writing. For two years, I have watched teachers struggle with students who try to pass AI-generated content off as their own. For almost as long, I have heard school administrators, students, and families express frustration that teachers are using generative AI for feedback on student work, narrative reports, and university recommendation letters, among other important forms of communication.
I think these complaints are less about generative AI and more about what generative AI enables: low-quality work that suggests a lack of care. In schools, these forms of writing are meant to be relational and effortful, and people are rightfully objecting to AI-generated writing that reflects neither joy nor investment from the person behind the words. It’s easy to use generative AI to produce this kind of robotic writing quickly. As I mentioned in the previous section, it’s harder but more useful to learn how to use it to support high-quality composition, as a supplement that creates better, more interesting products rather than generic, uninspiring ones.
I think McLeod’s vicious cycle becomes more likely if schools do nothing, if they don’t engage students and educators in understanding productive use of AI, if they don’t shift policy and practice to be responsive rather than avoidant.
Principle 4: In-class writing should not be the only way we “engage” with AI.
The question we should be trying to answer in the writing classroom is not, “How do we keep students from using generative AI?” The question we should be trying to answer is, “How do we know students are learning?” In-class writing is not the best answer to that second question.
I am regularly surprised by the number of writing teachers who have responded to generative AI by simply moving all student writing into class. I have searched and searched and have failed to find any compelling pedagogical, research-based, or even ethical argument that in-class writing is the best way to assess how well a student composes when the prompt requires skills beyond recall and summary. What I have found is that we have spent decades in education trying to move away from the high-stakes, low-validity environments that time-based assessments can create. When we move writing into class, what is the impact on the majority of our students who are willing, curious, eager, or even excited to write? Who benefits from that move, and what exactly are those benefits?
In “Ten persistent academic integrity myths,” Mark Bassett and Kane Murdoch write, “If a single intervention could prevent academic misconduct, the problem would have been solved long ago.” Academic integrity is complex, and so are the reasons why students might violate it. Moving all writing into class is a pedagogical move that limits both students’ ability to write freely and our ability to assess writing. Searching for a technological solution to monitoring student writing, like an AI detector or a process tracker, leads to an endless “arms race” between students and teachers that becomes its own vicious cycle. For example, take a look at this Human Typer Tool and consider how long process trackers like Google Docs revision history will remain useful.
All of this boils down to Eric Lars Martinsen’s call to “insist on process.” In the age of generative AI, writing products alone are not sufficient for understanding and assessing a student’s learning. The process in writing classrooms needs to be more visible and more explicit, whether or not AI is part of that process.

Principle 5: The answer to a technological disruption is not necessarily more technology.
“Insisting on process” does not require that we or our students use generative AI; rather, it asks us to create structures that make it easier to assess learning, whether or not students choose to use generative AI. By insisting on process, we give students the responsibility to document their process, compose writing, and reflect on their work.
For example, I can see launching writing projects with in-class writing, as Tim Donahue does; diversifying assessment methods with oral defenses or process portfolios, in the model of what’s required in International Baccalaureate Art; asking students who use AI to share transcripts of their interactions, as Mike Kentz has; or challenging students to reflect on good writing and assess AI against those criteria, as this wonderful English teacher on TikTok does.
For years, math teachers have made explicit in homework and on assessments which problems should be “calculator-free” and which are appropriate for use of calculators. World language teachers have made shifts in practice that both integrate and intentionally resist tools like Google Translate and Duolingo. Teachers of writing are now immersed in their own process of responding to technological change.
During a workshop a couple of weeks ago, a history teacher told me that as he was designing responses to generative AI, he found himself returning to “first-century” strategies as much as he was thinking about AI-assisted strategies. For him, this was reassuring: analog teaching and learning has a place in a classroom, even—especially!—if that classroom includes use of AI.
Principle 6: As we make moment-to-moment decisions, we must keep the future in mind.
In 2009, the composition expert Kathleen Blake Yancey published a report called “Writing in the 21st Century” in response to the explosion of online writing and social media. Writing still mattered, she argued, but the craft of writing was changing, and therefore so should the craft of teaching writing. She outlined three calls to action, and I think they are even more relevant in the age of generative AI:
1. Develop new models of composing
2. Design a new curriculum that supports those models
3. Create new pedagogies that enact that curriculum
As teachers of writing wrestle with how to respond to the day-to-day challenges that generative AI presents, I encourage them to keep Yancey’s calls to action in mind. As our understanding of generative AI and student use deepens, which new models of composing are emerging? What adjustments can we make to our curricula to acknowledge and support high-quality approaches to those models? Which new (and old!) pedagogies would enable us to act on that support?
Upcoming Ways to Connect With Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, take a look at my website and reach out for a conversation. I’d love to hear about what you’re working on.
In-Person Events
June 6. I’ll be delivering a keynote and facilitating two workshops (one on AI, one on student-centered assessment) at the STLinSTL conference at MICDS in St. Louis, MO, USA.
June 9. I'll be facilitating workshops on AI at the Summer Teaching and Learning Institute at Randolph School in Huntsville, AL, USA.
Links!
“The assignment was simple: have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in. Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career.” A moving and optimistic take on the future of the humanities.
T.J. Locke, Head of School at Episcopal Academy in Pennsylvania, tells the story of how he worked with a veteran math teacher on the topic of generative AI. This could be a model for human-centered, collaborative exploration of AI between teachers and administrators.
Math professor Robert Talbert has been working on mastery-based grading in his classes for years, and he wrote an update about how his assessment practices have changed in response to generative AI.
Anthropic looked at more than 500,000 interactions and published a report on how university students are using Claude.
Cult of Pedagogy has a very good introduction to competency-based learning.
Marc Watkins argues that if people better understood the downstream effects of using process trackers to monitor student writing, no one would be using them.
While we’re on the topic of longstanding practices that generative AI is disrupting, consider a homework audit.