ChatGPT is about to turn two, which means schools are about to enter their third year of wrestling with generative AI. I’ve been thinking a lot about what has changed over these two years, and also what has stayed the same. I think there are some AI ideas that we should be ready to let go of, and a couple we really should hold on to no matter what.
What we need to let go of
That “chatbot” is the same as “generative AI.”
For almost two years, I have heard people use “ChatGPT” in the same way people in the U.S. use “Kleenex” or “Xerox”: a single tool created by a single company has become representative of a much broader category. This is misleading 1) because ChatGPT is one of dozens of chatbot options (both general purpose and specialized), and 2) because chatbots are just one early application of generative AI.
Generative AI is multimodal: high-quality image, audio, and video generation tools are already widely available (which is why deepfakes are such a concern). There are data processing tools and workflow automation tools and coding assistants. There are AI tools you can talk to, ones that talk to you, and ones that can “see” images. Multimodality is one of the reasons generative AI could transform accessibility for people with disabilities.
Furthermore, the intention of the companies that own these large language models was never for us to use separate websites or apps like ChatGPT, Claude, or Gemini. The goal of these companies is to have generative AI so deeply integrated into our personal technology ecosystems that we barely notice it’s there. This integration phase is happening right now. Look at Copilot’s integration with Office 365, or Gemini’s integration with Google Workspace, or the way “Apple Intelligence” will be integrated into Apple devices. And don’t get me started on AI agents, which will be able not only to generate ideas but also to act on them.
This is the scenario we should be preparing for and teaching towards: bots with multimodal and agentic capabilities that are embedded in our favorite devices and platforms.
That we can tell when people use AI and when they don’t.
Study after study has shown that AI detection tools have never been reliable and are getting worse because humans are getting better at deceiving the detectors. These studies also show that humans have never been good detectors of AI-generated content, and we’re getting worse because generative AI is getting better. You can try Leon Furze’s deepfake game to assess your own abilities (I got 7/10).
Most schools I work with have caught students using AI in dishonest ways, but those uses have been clumsy: students copying and pasting AI-generated text without editing it, or uncritically allowing AI mistakes or fabrications into their work, or using AI to produce work that is an order of magnitude more sophisticated than their previous work. For what it’s worth, I see this kind of use among adults, too: recommendation letters or narrative reports that are suspiciously polished or generic, emails and feedback that lack a specific sense of audience or context, etc. As a student once put it to me, “The only people who are getting caught are the people who are bad at AI.”
Clumsy uses are just the tip of the iceberg. Far more students and adults are using AI in subtle ways, ways that we should spend time understanding rather than policing. Not only is generative AI improving, but people are getting better at using it. Both of these things make detection harder.
That policing AI use is more important than teaching AI literacy.
Unless your school has committed to a full-blown ban on AI use (and thus moved to fully supervised assessments) or to unlimited use of AI, you are caught in a widening gray area, trying to define what “appropriate” and “inappropriate” use of generative AI looks like. A policy is an inadequate tool for addressing effective and ethical use of AI.
Right now, every student with access to the internet has access to chatbots that can serve as mediocre personal tutors and academic assistants. Assume that both the access to and the quality of that tutor/assistant are only going to improve, and relatively quickly. Imagine a scenario where every student at your school has a competent AI tutor on their mobile device, a tutor they can talk to, that can both take notes and read their notes, that can prep them for tests, that can edit their writing in real time. What is the role of school in that scenario? How could we possibly police AI use in that scenario? Who is going to teach students how to use this powerful technology that’s so accessible to them?
AI should be curriculum, not just policy. It should be an opportunity for students to learn, not just to comply. Look at how Stefan Bauschard asked his debate students to use AI to generate ideas and then use their own minds to interrogate, elaborate on, and prioritize those ideas. Or how Annie Fensie developed “worked AI examples,” with prompts and sample AI output, that teach students how bots can be prompted to question student thinking, to support effective study skills, or even to help them locate their intrinsic motivation.
Where schools can have a deeper, longer-term impact on more students is by giving them the skills and knowledge they need to make good decisions about AI, especially when they find themselves in situations where the temptation to make a bad decision is strong.
That cheating is the most important AI issue facing students.
Schools’ focus on AI’s potential impact on coursework and assessment is understandable, but narrow. If we are genuinely concerned about the role AI plays in our students’ lives, then we should be learning and talking about at least three other topics:
“Artificial intimacy.” People are forming relationships with bots. Psychologists and human development experts have been flagging this as a concern for years, yet I don’t see it raised in education spaces nearly as often as cheating. Bryan Alexander’s excellent overview captures just how many tools and use cases there are for bot companions as well as the potential benefits and harms to our human networks and relationships.
AI’s impact on the world beyond school. AI is transforming many of the career paths that interest our students. It is improving coding and medicine in fascinating ways, it is raising deep intellectual property and authorship questions in the creative arts, and it is threatening the integrity of media and journalism. Its multimodal capabilities are contributing to mis/disinformation in elections around the world (Marc Watkins offers some useful ideas for engaging students on this topic).
The ethical tradeoffs of using generative AI. I’ve written about this before, so I’ll just say that “AI ethics” is not about academic integrity; it’s about AI itself. When we teach students about algorithmic bias, AI’s impact on the environment and human labor, the copyrighted material that it is trained on, etc., we are helping them make ethical decisions about whether and how to use it.
That AI is neutral.
Here’s something that really bothers me about the commonly used calculator analogy for generative AI: calculators are simple and neutral, and AI is neither of those things. You can take ten different brands of calculator, input the same problem into all of them, and expect identical, accurate responses based on the rules of mathematics. AI doesn’t work like that: you can input the same prompt into ten different chatbots and get ten responses that vary in content, tone, accuracy, and style. These responses are shaped by the particular dataset, training, and design of each model, all of which are informed by human biases and decision-making.
We should not accept AI output in the same way that we accept calculator output; it is neither definitive nor objective. As Joy Buolamwini writes in Unmasking AI, “Default settings are not neutral. They often reflect the coded gaze—the preferences of those who have the power to choose which subjects to focus on.” When we encourage people to “critically evaluate” AI outputs, we have to remember that this is not just for factual accuracy, but for which perspectives are and are not represented.
That adults can and should use AI at school but students can’t and shouldn’t.
In their book The Students Are Watching: Schools and the Moral Contract, Theodore and Nancy Sizer write, “The kids count on our consistency. Few qualities in adults annoy adolescents more than hypocrisy… The people in a school construct its values by the way they address its challenges in ordinary and extraordinary times… Institutions can bear witness, in good and bad times. That is, they can model certain kinds of behavior.”
Students lack models of AI decision-making that prioritize wellness and learning. In the absence of open discussion and modeling of AI use, they are making decisions based on a few things: curiosity, advice from peers and/or family members, stress, and the implicit and explicit incentives of “doing school” (grades, workload, lack of perceived value/relevance, etc.).
Of course we should be afraid of the “AI doom loop,” where teachers use AI to generate materials for students, students use AI to complete them, and then teachers use AI tools to grade that work. Nobody benefits in that scenario. But I would argue that this scenario will happen because we don’t talk about or teach effective use of generative AI for learning, not because we do.
What we need to hold on to
Generative AI is a tool over which we have agency.
As impressive as AI seems and as inevitable as its rise appears, we have power over it. Consider three very good pieces I read recently: “Why AI Isn’t Going to Make Art” by Ted Chiang; Matteo Wong’s response, “Ted Chiang is Wrong About Art”; and “Echoes of Concern—AI and Moral Agency,” by Sarah Hull and Joseph Fins. While explaining all of the ways AI will change art and medicine, the authors emphasize that decision-making is a fundamentally human act that requires intention, empathy, creativity, ethics, and morality, none of which are traits of generative AI. AI’s power is in its ability to assist human decision-making, not replace it.
I don’t believe generative AI is going to revolutionize education for the better, nor do I believe it’s going to destroy it. Only humans have the will and the ingenuity to do either of those things, with the assistance of AI or without it. Follow any of the current events that involve AI and you can see that its positive and negative applications are not driven by the technology, but by the humans who are deciding how to use that technology.
This is where schools have agency over AI. We teach students how to make decisions, how to be autonomous, ethical adults. How we approach AI at school now will affect decisions they make about AI, now and in the future.
School matters. We need to make the case for why.
Let’s try a proposal: the new problems that generative AI is raising in education are less important than the old problems it is exacerbating. Students are increasingly cynical about and disengaged from school. Here are two pieces with some data: one on K-12 education from NPR and another on the “transactional” view of college from The Chronicle of Higher Education. The way school is designed is not activating students’ interest and has created an incentive structure that rewards performance over learning. Generative AI is not responsible for this dynamic; it’s just making it easier to exploit.
In an AI age, what’s the case for school? How would we make that case to ourselves and to others? And if the vision doesn’t match the reality, what needs to change?
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on.
In-Person Events
September 24: In partnership with the Pennsylvania Association of Independent Schools (PAIS), I’ll be running two workshops at “Leading in an AI World,” an event for leadership teams and boards. Join me at The Baldwin School outside of Philadelphia, PA, USA.
November 20: I’ll be co-facilitating “Educational Leadership in the Age of AI” with Christina Lewellen and Shandor Simon at the New England Association of Schools and Colleges (NEASC) annual conference in Boston, MA, USA.
Online Workshops
October 2: I’ll be speaking at the School Marketing AI Summit, a free online event for communications and enrollment professionals. My talk is called “Four Priorities for Human-Centered AI in Schools.”
October 15: In partnership with the California Teachers Development Collaborative (CATDC), I’m facilitating “Making Sense of AI,” an introductory AI workshop for educators.
October 31: Also with CATDC, I’m launching a four-part online series called “Deepening our AI Practice” for educators who already have a working knowledge of generative AI.
Links!
Reshan Richards and Steve Valentine with an excellent guide on how to approach articulating AI guidelines for your school.
UNESCO has released its AI competency frameworks for teachers and for students.
These ideas on using AI in the classroom from the MIT Teaching Systems Lab would make for excellent core values/guidelines for schools. It should be no surprise that they come from interviews with teachers.
Anna Mills makes a thoughtful, balanced argument that using process-tracking technology in writing classes can increase student accountability in a lower-stakes way.
Leon Furze and co-authors have updated their AI Assessment Scale (a model I’ve seen many schools use as inspiration for their AI policies). I would also highly recommend an article they link to in explaining their update: “Validity matters more than cheating.”
Troubling study of generative AI’s “covert racism,” in which models show implicit bias against certain dialects in the text they’re given, even as the bots explicitly reject racial stereotyping.