Thanks for sharing your work and insights, Eric. Thought you might like this stuff from the Institute for the Future, particularly the second item:
💬🤝 Platform trains Gen Z in respectful disagreement
The founder of Khan Academy has established a new program, “Dialogues,” which connects youth aged 14 to 18 online so that they learn to have productive conversations on divisive topics, such as abortion, climate change, and immigration. The intention is not to debate topics or convince their peers, but rather build skills in listening to and sharing perspectives while maintaining respect.
What if future generations learned the skills to navigate difficult conversations and differing opinions, just like they learned the alphabet?
https://www.fastcompany.com/91332886/sal-khan-new-dialogues-program
😫👀 Students impose self-surveillance to prove they didn’t use AI
With many students using AI to cut corners, some who complete assignments all on their own are being accused by teachers of using AI’s help. One college student received a zero on an assignment when her professor incorrectly concluded that AI wrote it for her. Students are now proactively recording hours of work on their computer screens in case they need to prove their honest work to suspicious instructors.
What if future students have to work harder and harder just to prove their worth in the face of AI’s growing power?
https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin-students-cheating.html
So much resonance here, especially the quiet threads about attention, sovereignty, and the slow erosion of trust beneath the tech optimism.
Your observations read like field notes from a future we’re already inhabiting, even if we pretend otherwise.
I spend a lot of time looking at these fractures through a security and governance lens, and I'm always interested in how we can anchor progress without losing the human signal.
Would be glad to exchange perspectives sometime. Thank you for mapping the terrain so honestly.
Points 18 and 26 stood out to me the most - 110% agree with them. As much as the focus of a school should always be on teaching and learning, we can't forget that anyone working at a school in a non-teaching capacity is still working to support the teaching and learning happening every day. We can't overlook their PD on AI either, without risking a growing knowledge/skills gap between staff.
On AI "detectors"...yikes! They were effective/functional for what...2-3 months...2.5 years ago?! And then became totally obsolete (and damaging). Yet a common reaction from many teachers to suspected AI-written (or AI-aided) work is, "Phew, there's a tool to detect AI!" followed by "They must work, right?". Unfortunately, trusted plagiarism detection tools aren't doing much to dispel that myth either... I wonder beyond policy, how can we effectively spread the word on something like this that is truly truly super important. I've got way too many thoughts/opinions on AI detectors...I'll leave it there :D
And then lastly, curious to hear more of your thoughts (and others'!) on the SAMR model. Not a fan? But now finding some potential value?
Great read as always! Cheers, Eric!
Thanks, Juan! My only hesitancy around SAMR is that it introduces yet another set of jargon into the AI discourse, jargon that I’m not sure was all that successful the first time around. But, I like the way it asks us to reflect on why and how we integrate tech.
Great list, Eric. I concur with all of these. I’d love to attend one of your workshops, and I say that very infrequently - I’ve stopped going to most AI PDs because I haven’t learned very much, so I’ve started offering them myself. Teachers are hungry for practical tools and use cases.
Totally agree! Really tried to shape my sessions around that need.
Brilliant as always, Eric. They all resonate with my experience. I'd underline no. 4 with a big, bright pen. No. 10, I'm hoping you're going to write more about in the future.