It’s been a busy month, so I didn’t have time to compose a proper post, but I did want to check in and offer some observations from my time on the road. This is a wonderful time to work with educators: most of them are closing their school years and feeling tired and hopeful, celebratory and reflective, and willing to learn.
I have seen more authentic, critical engagement with generative AI in the last five weeks than I have in the last two years. The overall attitude at schools is (finally, happily) changing.
You can engage with AI, learn about it, and build AI literacy without demanding that your entire school commit to a single AI tool.
Effective “AI-aware” teachers do more than just bring generative AI tools into the classroom. They also make time and space for student reflection, feedback, and discussion: What is it like to use these tools for schoolwork? How useful are they? Would you use them for this purpose again? Why or why not?
For most schools and in most disciplines, any level of generative AI knowledge and skill should result in changes to assessment designs and practices. If AI training does not result in shifts in assessment, it’s not the right AI training.
There is increasing diversity in how AI is approached at home. I have met students whose parents have purchased a premium chatbot subscription for them for the explicit purpose of using AI for schoolwork. I have met other students whose parents have blocked chatbots on home networks for the explicit purpose of not using AI for schoolwork.
If you want to find the most innovative uses of AI at schools, do not look for institutions. Look for individuals.

I can’t believe I’m writing this, but it might be time to resurrect (or at least revisit and update) the SAMR model.
Schools that elevate early AI adopters in their own communities as resources are achieving two things: 1) ensuring that learning about AI is ongoing and peer-based, and 2) sending a clear message about their values and priorities when it comes to AI.
Students appreciate it when their teachers talk to them about AI. They think it’s “cute” or even “cringe” when teachers try to teach them how to use AI. They do not appreciate being forced to use specific tools that they already know are not as good as tools available to the general public.
The most innovative uses of generative AI I have seen in schools lately do not involve chatbots at all.
You make assumptions about who is likely to embrace or to resist AI at your peril.
Decisions about which AI tools and platforms to adopt often prioritize vague promises of data security and privacy over authentic applications that will improve the work of the school. This is also true for policy-writing, professional development, and student engagement.
With or without approval from their schools, many educators and students have developed deep knowledge and skills when it comes to productive, interesting uses of AI. School culture is the most important factor in whether or not anyone else knows about that use, and whether or not those students and educators feel comfortable talking about it.
If you are worried that your school is “behind” on AI, this is a moment to get clear about what that means and what it would look like to be “ahead.”
Students and educators use AI for the same reasons: to make more time and space to do things that are more meaningful to them, to create work that—for whatever reason—they believe to be better than what they are capable of, or to simply find time to rest.
Schools continue to underestimate how much students are using AI. The students you catch “cheating” with AI are not representative of numbers or of use cases.
From my perspective, networked experiences are the most effective ways schools build their AI capacity. Just in June, I was a part of five: I closed a yearlong AI cohort of seven Bay Area schools, launched a similar cohort with 16 New England schools, facilitated AI design workshops at the STLinSTL conference, spent a few hours with early-career teachers at the Klingenstein Summer Institute, and returned for a second Summer AI Institute for Canadian independent schools. If you are not engaging with other schools on this topic and, importantly, being generous with your work and your knowledge, I don’t think you’re learning as much as you could be.
The people who work on the non-instructional side of schools (communications, finance, administration, etc.) could be leading AI innovation, but instead are often ignored in AI professional development.
Many “AI-forward” schools are asking teachers to use AI tools without addressing what really concerns teachers: student use, disruption to longstanding teaching practices, intellectual property rights, and environmental impact.
Many “AI-forward” schools are giving students AI tools when what students need (and what they are asking for) is trust, dialogue, and skill development around AI. Earlier this month a teacher told me a story about asking her students to use a school-approved AI tutoring platform for coursework rather than more powerful tools like ChatGPT or Claude. “It feels like you gave me a flip phone when I have access to a smartphone,” a student replied.
“AI-resistant” schools often use their resistance as a reason not to teach their educators and students about AI. Ignoring AI is not a component of resisting AI.
I’m starting to think schools need policies on the use of AI detectors more urgently than they need policies on the use of AI.
Educators are as likely as anyone else to defer to the authoritative tone of chatbot output. I have been amazed by how quickly adults are willing to believe what chatbots tell them, even when the output contradicts their expertise and experience.
Anywhere from 20% to 30% of people in my professional development workshops tell me they have never used a chatbot.
Students are talking to students about AI. Educators are talking to educators about AI. Rarely are these two groups talking to each other in any open, generative way.
I’d be curious if my observations align with yours. Share your thoughts in the comments!
Upcoming Ways to Connect With Me
Speaking, Facilitation, and Consultation
If you want to learn more about my work with schools and nonprofits, take a look at my website and reach out for a conversation. I’d love to hear about what you’re working on.
Online Workshops
October 16-17. I’ll be facilitating a two-part online workshop for school leaders on developing AI position statements and ethical guidelines. Offered as part of the National Association of Independent Schools (NAIS) Strategy Lab series.
Links!
If you read one thing about AI this week, read Alondra Nelson’s Amherst College commencement address.
Olivia Han, a high school student in Washington, USA, writes a break-up letter to ChatGPT.
Mary Helen Immordino-Yang’s research on the adolescent brain and “transcendent thinking” is so compelling. How could we engage students in this way on the topic of AI at school?
This AI Literacy Framework from the OECD, Code.org, the European Commission, and others is very good, if for no other reason than that it is skills-based, not content- or tools-based.
I have written before about AI’s potential role in extending the mind. One of the creators of the theory of the extended mind, Andy Clark, offers his own, much more sophisticated, take.
Thomas Guskey reviews some recent research on the effectiveness of teacher professional development and has some suggestions for schools.
Thanks for sharing your work and insights, Eric. Thought you might like these items from the Institute for the Future, particularly the second one:
💬🤝 Platform trains Gen Z in respectful disagreement
The founder of Khan Academy has established a new program, “Dialogues,” which connects young people aged 14 to 18 online so they can learn to have productive conversations on divisive topics such as abortion, climate change, and immigration. The intention is not to debate topics or to convince peers, but to build skills in listening to and sharing perspectives while maintaining respect.
What if future generations learned the skills to navigate difficult conversations and differing opinions, just like they learned the alphabet?
https://www.fastcompany.com/91332886/sal-khan-new-dialogues-program
😫👀 Students impose self-surveillance to prove they didn’t use AI
With many students using AI to cut corners, some who complete assignments entirely on their own are being accused by teachers of using AI. One college student received a zero on an assignment after her professor incorrectly concluded that AI had written it for her. Students are now proactively recording hours of work on their computer screens in case they need to prove their honest work to suspicious instructors.
What if future students have to work harder and harder just to prove their worth in the face of AI’s growing power?
https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html
So much resonance here, especially the quiet threads about attention, sovereignty, and the slow erosion of trust beneath the tech optimism.
Your observations read like field notes from a future we’re already inhabiting, even if we pretend otherwise.
I spend a lot of time looking at these fractures through a security and governance lens, and I’m always interested in how we can anchor progress without losing the human signal.
I’d be glad to exchange perspectives sometime. Thank you for mapping the terrain so honestly.