If you use AI, you’re a bad teacher.
If you use AI, you’re a good teacher.
Here’s a common scenario from my school visits: at the end of a session, an educator or two will come up to me and, in a lowered voice, say, “I use generative AI for X, and I love it. Do you think it’s OK to use it in that way?”
Here’s another common scenario: at the end of a session, an educator or two will come up to me and, in a lowered voice, say, “I think it’s wrong that my colleagues use generative AI for X, and I think we should prohibit it. What do you think?”
Honestly, what I think matters a lot less than what you and your colleagues think and, more importantly, what you can agree on. So, let’s talk about it.
If you use generative AI for feedback on student work, is that good teaching?
A lot of edtech companies think so. Push-button AI tools for educators like MagicSchool and SchoolAI have feedback features. The Chrome extension Brisk generates feedback in the form of Google Docs comments that teachers review, approve, and post on student work. Snorkl offers immediate AI-generated feedback on digital whiteboard activities.
Using technology to streamline feedback workflows is not new. Tyler Rablin, a teacher and assessment consultant who does some of the best, most practical work on student-centered grading I’ve ever seen, wrote in 2014 about how he used text expanders to make his feedback process both more efficient (using coded abbreviations to embed pre-written comments in student work) and more effective (those comments included links to helpful resources).
If you’re not familiar with what research says about designing and delivering effective feedback, read Andrew Housiaux’s recent, excellent summary and consider these scenarios:
To address students’ desire for more and more frequent feedback, a teacher shows students how to use rubrics, assignment prompts, and models of excellence to prompt chatbots to provide feedback aligned to the teacher’s expectations.
After handing back work with his feedback on it, a teacher encourages students to read the feedback and, if they want, use a chatbot to clarify the feedback and offer advice on how to apply it to the next assessment.
A teacher makes voice memos of feedback for students, then uploads those audio files into a bot and asks it to create more concise, polished text feedback to share with students.
If you use generative AI to differentiate instruction and content, is that good teaching?
Again, many edtech companies and organizations promise personalized learning powered by AI. Alpha Schools and Khan Lab School have put adaptive AI tutors at the center of their value propositions.
Teachers are exploring applications of generative AI in this area, too.
A teacher who works with English language learners uses AI to create differentiated worksheets for a field trip to a local museum. The worksheets are aligned to her students’ varying skill levels.
After her class performs poorly on a math quiz, a teacher shares that quiz with a chatbot and asks it to create new versions so she can offer students more retake opportunities.
A teacher describes the various learning plans of students in his class and asks AI to adapt his lesson plans and assessments accordingly.
Teachers use Eleven to create audio versions of text to support students with learning differences or to allow language learners to hear a text in the target language.
If you use generative AI to create or redesign learning experiences, is that good teaching?
Well before the emergence of generative AI, teachers had been under pressure to adapt: adapt to emerging technologies, adapt to research, adapt to shifting cultural norms and politics, adapt to new generations of young people. Much of this adaptation comes in the form of changing what we teach and how we teach it.
A teacher is asked to read research articles on cognitive science and human development and apply them to his teaching. He asks a chatbot to explain those articles in the context of his discipline and provide adaptations to his curriculum aligned to those principles.
Inspired by a professional development workshop on visible thinking, a teacher uploads one of her units into a chatbot and asks it how to redesign classroom activities to be more focused on student-driven thinking protocols.
A teacher is about to embark on a current events exploration with her class. She surveys her students for their primary interests and asks them to share news articles with her about events that have captured their attention. She gives this information to a chatbot and asks it to imagine a variety of ways she can run her class so these interests are addressed.
If you ask students to use generative AI, is that good teaching?
In my experience, student use of AI at school falls on a spectrum. At the far ends are behaviors that I think are troubling and that schools should proactively disrupt. At one end, students are on devices in class and, when asked a question, simply relay the question to a bot without bothering to think for themselves. At the other end, schools deploy an arsenal of detection and surveillance tools in an attempt to prevent student use, leading to a culture of suspicion and fear around AI among both teachers and students.
In between these two extremes is a wide gray area.
Aleksandra Kasztalska worked with students on using AI tools as part of the research and writing process, had them disclose and reflect on their AI use, and then dedicated class time to discussing their experiences.
A student asks a teacher a question in class and the teacher says, “I’m in the middle of helping someone else. Ask ChatGPT for an answer so you can move forward.”
Teachers encourage their students to use AI chatbots, specifically voice mode, to practice at home for discussions, debates, presentations, or other oral assessments.
If you use generative AI to analyze student work and data, is that good teaching?
Schools are notorious for collecting a lot of data but not organizing, analyzing, or acting on that data effectively. At the same time, they are under pressure to make “data-informed decisions” and identify and assess measurable outcomes. At the classroom level, teachers are often doing data analysis organically and even subconsciously: they are inundated with information from their students and make innumerable decisions based on that information. That cognitive load can be heavy.
A teacher shares a whole class’s work on an assessment with AI and asks it to recognize patterns, highlight differences, and offer feedback on the design of the assessment.
A teacher uploads student exit tickets into a bot and asks it to design lesson plans and other interventions in response to that data.
In preparation for a student-led conference, a teacher shares a portfolio of one student’s work with AI and asks it to create questions to ask and specific examples of work to highlight during the conference.

What’s “good teaching”?
Educators now must make what Annette Vee calls “AI-aware” decisions about their work.
AI-aware decisions require that the educator making the decisions has sufficient understanding of AI. I’ve written before about Stanford’s four categories of critical AI literacy, a useful framework for thinking about what educators need to know about and be able to do with AI. Beyond foundational knowledge and skills, educators need guidance from their schools: How should they think about data privacy, student safety, approved tools, and academic integrity when it comes to the use of emerging technologies like AI?
But these decisions should not be driven solely by AI. I am constantly asked for suggestions of “best practices” for AI use in teaching, and I think what we really need to address is how we define and assess good teaching in an AI age.
If the teacher is implementing new approaches in ways they did not think possible before, is that good teaching?
If the teacher is updating practices to better respond to student feedback, is that good teaching?
If the teacher is providing more frequent, targeted feedback and more personalized instruction, is that good teaching?
If the teacher is aligning instruction and assessment to research on learning and human development, is that good teaching?
If the teacher is experimenting with new tools to push the boundaries of their practice, is that good teaching?
If the teacher is updating or transforming elements of their courses to be responsive to the world beyond school, is that good teaching?
If the teacher is using AI in service of these goals, is that good teaching?
As I hope is becoming clear, how we assess teaching—by its intentionality, its inventiveness, its understanding of people and content and pedagogy—should be a model for how we assess learning. If generative AI requires us to question, or at least be clearer about, the criteria by which we assess our work, then it has done us a favor. Reflective practice is good teaching.
Upcoming Ways to Connect With Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, take a look at my website and reach out for a conversation. I’d love to hear about what you’re working on.
In-Person Events
October 22 and 24. Kawai Lai and I will be facilitating "Leading from the Middle: Managing Up, Down, and Across," a one-day professional learning workshop for middle leaders (October 22nd session offered in Philadelphia, PA, USA. October 24th in Pittsburgh, PA, USA). Designed for department chairs, deans, and others in middle management, the session offers practical, human-centered ways to support those we supervise and those who supervise us. Offered in partnership with the Pennsylvania Association of Independent Schools.
Online Workshops
September 24. I’ll be facilitating “Making Sense of AI,” a workshop that introduces the knowledge and skills required to address AI at school and in the classroom. Offered in partnership with the California Teacher Development Collaborative (CATDC). Open to all, whether or not you live/work in California.
October 15. I’ve partnered with CATDC on “Practical AI for School Leaders,” a new hands-on workshop that focuses on the uses and tools of AI relevant to the day-to-day work of school administrators in academic and non-academic roles.
October 16-17. I’ll be facilitating a two-part online workshop for school leaders on developing AI position statements and ethical guidelines. Offered as part of the National Association of Independent Schools (NAIS) Strategy Lab series.
Links!
As I continue seeking specifics on the environmental impact of generative AI, I’m learning a lot from Andy Masley and Sasha Luccioni.
Jane Beckwith has developed an AI Friction Scale to support students in understanding the role and impact of generative AI in their work. I love the alignment with the AI Assessment Scale (Furze et al.).
Sydney Sullivan describes how she collaborated with her students on an AI policy for her class.
“I am a proud Luddite in the classroom. There is no middle ground.”
“Will teachers mark down student work assisted by GenAI, and should they?” Findings from 33 interviews with university instructors.
Tony Frontier has been conducting focus groups on AI with high school students and shares some of their thoughts.
Sarah Elaine Eaton makes a distinction between “citation” and “attribution” and explains why we should be focused on the latter when it comes to AI.
Bryan Alexander with his latest scan of how AI shows up in culture. I learn so much from these pieces.
I had a great conversation with Peter Baron on the Moonshot podcast about what I’m seeing in my AI work with schools.