Back to School with AI, Part 2: Talk with Students about AI
"The classroom remains the most radical space of possibility." -bell hooks
In June, The New York Times podcast “The Daily” did an excellent episode on AI and education where they spoke to a number of students (mostly undergraduates) and faculty members about how ChatGPT affects their day-to-day work. I hope you’ll listen to the whole episode, but this exchange at the very end has stuck with me:
Professor: I suspect one reason it hit so hard for me is that a great many students never saw themselves on a voyage of discovery along with me. They saw themselves en route to a credential. And to some extent, the upset is my realizing that not everyone is going to see this as a magnificent voyage of discovery.
Reporter: Well, have you talked to your students about ChatGPT?
Professor: I need to do that at some point.
Why don’t we talk more with students about important issues in education? The students interviewed in this episode had nuanced reactions to using ChatGPT in school. Some were using it to cheat, yes, but many others were using it to learn by testing ideas or generating new ones or seeking feedback on their work. They expressed enthusiasm for what ChatGPT could help them do but also ambivalence and uncertainty about where the boundaries for appropriate vs. inappropriate use were. In other words, they were asking some of the same questions educators are asking each other.
Who better to have the conversation about AI in education than students and educators together? So let’s talk with students about AI and its role in the learning process.
Some Guiding Principles
Possibilities over policy. If we talk to students about AI in the context of cheating and academic honesty only, we are 1) implicitly and explicitly creating an adversarial culture around AI and 2) missing the opportunity to explore a powerful tool in productive, generative ways. Of course schools need written guidelines for appropriate use of AI, but the tool is evolving so quickly, and our understanding of and competence with it so nascent, that we should be spending more time understanding the tools than composing policies about them. Exploring AI with students could be a genuinely authentic learning experience: it is relevant to students’ current and future lives, it requires novel solutions to unprecedented challenges, and it can empower students to assume ownership over their learning.
Adopt a “Beginner’s Mind.” Practicing the Zen Buddhist concept of shoshin is a useful way to approach teaching in general, but it’s especially important when tackling an innovation like AI. We can be blinded by our own expertise and/or fall prey to the human instinct to prioritize examples that validate our existing beliefs and desires. Embracing intellectual humility will make us more open to what our students can teach us about their experiences with AI.
Ensure access and equity. Talking openly with students about AI (about anything!) requires a learning environment that is psychologically safe and that promotes agency. This starts by respecting students as learning partners. Be explicit about why you want to collaborate with them on AI and what role you hope they play in that process. Do not assume students are comfortable with AI, know how to use it, or have what they need to access it. Find out what they need to begin collaborating with you on AI. At a minimum, make sure all of your students have the devices, accounts, time, and support they need to use these tools with you and on their own.
Things to Try with Students
Co-Create “AI Agreements.” My friend and frequent collaborator Kawai Lai introduced me to August Public’s concept of “team agreements.” Designed for professional teams, the practice asks colleagues to co-create agreements that explicitly define how they want to work together. They are more responsive than rules, more concrete than norms, and more human-centered than institutional policies. Some examples of agreements teams of professionals have created:
“We agree to send agendas for meetings 24 hours in advance.”
“We agree not to expect replies to emails outside of business hours.”
Team agreements are specific, they address real tensions, and they are editable, meaning they should be revisited and updated regularly. When it comes to working with students on AI, I think co-creating these kinds of agreements with students could be powerful because it’s a collective effort to ensure collective accountability. Possible examples:
“We agree that if we are unsure whether a use of AI is appropriate, we’ll ask before we act.”
“We agree that using AI to come up with ideas is appropriate.”
“We agree to share our experiences with AI in class to help each other.”
Negotiating the boundaries of appropriate and inappropriate use of AI will be an ongoing project in schools. Why not involve students in that conversation and be able to revisit and revise agreements as AI evolves?
Do your assignments together. Use class time to ask AI to complete an assignment you would normally ask students to do on their own. Feed the instructions as written to a chatbot and ask students to evaluate the result. Do they know enough about the subject to effectively evaluate the response? If so, how could they use AI to extend or more deeply test their knowledge? If not, what do they need to do to gain the knowledge they need? Could AI help them do that?
Make a bot together. I like Poe’s “Create a Bot” tool. With no coding knowledge, you can create a personalized AI chatbot built on ChatGPT or Claude. Creating a bot is a fun way to learn how context, rules, and prompts are essential to training AI to perform a specific function or play a certain role. For example, I created a “Home Cook Challenge” bot with the below context and rules.
If you and your students could have a bot for your class, what would that bot do? What context and rules would that bot need to perform its intended function well? Here are detailed instructions on how to create a bot in Poe.
Show your work. In part one of this series, I suggested seven things teachers could try with AI to improve their own work and workflows. If you are already trying AI, show students what you’re doing. Ask them what they think. Tell them what you think. Your willingness to show your work models vulnerability and offers implicit permission to talk openly about using AI for learning.
Ask students to be teachers. If you show your work, students are more likely to show theirs. Tech columnist Kevin Roose wrote just yesterday that educators should “assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they’re being physically supervised inside a school building.” If this is true, give students the stage and let them share their own experiences. Encourage them to share when they found AI to actually be helpful for learning, when it wasn’t helpful, or when it just acted weird. Chances are, they have used or seen uses of AI that you haven’t even imagined yet.
Use AI to reimagine what’s possible in your class. I love this assignment Ethan Mollick developed for his entrepreneurship students. Rather than try to “AI-proof” his course and his assignments, Mollick asked students to use AI to do work that might have seemed impossible for them in a pre-AI world. The key line for me: “I won’t penalize you for failing if you are too ambitious.” Collaborating with students on imagining how AI might extend our capabilities and transform the kind of work we can do is, for me, a way to empower students to take ownership of their learning.
Have you talked with students about AI? What have they said? How are you approaching AI with your students? Let’s talk more in the comments.
Links!
Yet another reason not to use AI detectors (via Tara García Mathewson at The Markup).
Vanderbilt University has disabled Turnitin’s AI detector and published its reasons why.
“When I raise my hand with a question about ‘How do I do this?’ I am also asking the question, ‘Who am I?’” Math teacher and Desmos founder
on what AI can’t provide students.
I appreciate that the University of Arizona has provided faculty with not only sample syllabus statements on AI, but also questions they need to consider about the potential impacts different policies will have on students.
Prof. Siva Vaidhyanathan on what he did when his students used AI to cheat.
Thanks for reading! If you enjoyed this post, I hope you’ll share it with others. It’s free to subscribe to Learning on Purpose. If you have feedback or just want to connect, you can always reach me at eric@erichudson.co.