When it comes to learning and teaching, AI is a design challenge, not a technology challenge.
We are making, and will continue to make, choices about what, why, and how we want students to learn in our new AI era. In many cases, this means integrating AI into coursework and teaching students how to use it responsibly. In other cases, though, it might mean purposefully and strategically going tech-free, and I wanted to dedicate at least one post to that.
By this point, I hope it’s well-documented that I think we should all learn how to use AI. These tools are too powerful and too accessible for us to simply ignore them, and I believe that technical literacy in AI will be a minimum requirement of working in schools, like knowing how to use email, search the internet, navigate a learning management system, or work in collaborative documents. Plus, there are so many interesting ways to use it as teachers, with students, and in school leadership.
But, part of knowing how to use AI is knowing when not to use it.
Tech-Free Options for Learning Design in an AI Age
Let’s start with a proposal: Every assignment in your course that has a take-home or online component is “AI-vulnerable,” by which I mean there are ways learners with access to generative AI could use it to perform a task that you think is critical for the learner to perform themselves.
If you accept this premise, then you as the designer of assignments have to make a choice about whether and how to address AI-vulnerability. Maha Bali lays out four clear options:
Make AI use impossible.
Discourage AI use by redesigning assessments to forms AI would not perform well.
Allow AI use within boundaries.
Allow indiscriminate AI use.
For this post, I want to focus on the first two options, especially the second, where questions about AI intersect with a lot of important questions about the purpose and design of assessments.
1. Make AI use impossible.
The only way to make AI use impossible is to design the activity to be completed in class using analog tools. Supervised in-class work can offer teachers authentic evidence of student understanding, and using analog tools not only prevents access to AI, but also prevents other distractions related to technology use.
However, class time is limited, meaning teachers and students might not have the time they need or want to do their best work. In addition, relying on analog tools can narrow the scope of what’s possible in an assignment, and it can prevent access to technology that supports learning, especially when it comes to differentiation. As a former English teacher, I couldn’t imagine moving all of my writing assignments into class; modern writing is deeply intertwined with technology, and I wouldn’t want students to lose the time, space, and practice they need to wrestle with complex ideas and to develop a personal writing process that can last them a lifetime.
2. Discourage AI use by redesigning assessments.
To discourage or disincentivize AI-generated products like essays, reports, or code, I think it’s more fruitful to address the design of the learning process.
Make Thinking Visible, Collaborative, and Empowering
A few months ago, I attended a wonderful presentation by two English teachers at The Nueva School, Allen Frost and Claire Yeo. As teachers, Frost and Yeo see the core of their work as developing students’ identities as writers and thinkers, and they see that as a collaborative, student-centered process. They also see independent, out-of-class writing as an essential learning task.
So, they have moved a lot of the writing process into class and centered student voice and collaboration in that move. By using visible thinking exercises and structured discussions, by asking students to complete tools like concept maps or graphic organizers by hand, by asking students to write across genres in formats that are non-traditional, they challenge students to come up with their own ideas, refine and defend them, and present them in novel ways (and help their peers do the same).
Approaches like this model for students the rigorous process that goes into thinking deeply and writing well, demonstrate how that process is relational and ongoing, and make learning visible in a way that helps students reflect on and apply their ideas. On a practical level, students now have a clear document of their process, allowing them and their teachers to more easily trace where ideas and evidence come from.
These practices are aligned to what Ron Ritchhart calls “cultures of thinking” as well as Wendy Sutherland-Smith and Philip Dawson’s three “fundamental needs” of engaging students in complex learning tasks:
Autonomy: having real choice about topic and mode, and seeing how the assessment meaningfully connects with their life and career.
Competence: being supported to build confidence and skills gradually.
Relatedness: feeling connected to teachers and peers, and feeling that they matter.
Without a lot of user effort and detailed prompting, AI performs poorly on tasks that lack clear, preexisting scaffolds (unlike the traditional five-paragraph essay), and it struggles to replicate a person’s particular voice. What’s more, students tell me that when they are working on an idea that they want to express in their own voice (as Frost and Yeo work very hard to help them do), they don’t want to use AI. The output is generic, it can’t cite text well, and it often takes longer to get AI to “sound authentic” than it would for the student to do the work themselves.
If we believe students should be able to do work outside of class using technology, we can’t prevent all of them from using AI, but through certain design choices we can discourage its use and make it less appealing.
Ask Students to Explain Their Understanding
As Tom Sherrington has said, “understanding is the capacity to explain.”
In reviewing the College Board’s AI guidelines for the AP Capstone and AP Seminar courses, I was struck that their expectation was very simple and very analog: introduce “checkpoints that take the form of short conversations with students during which students make their thinking and decision-making visible (similar to an oral defense).”
This is, essentially, retrieval practice, a strategy that research has shown to be effective for student learning. Even if students are using AI to support their schoolwork, their ability to explain what they know in real time to another person is a way to assess if they are learning. It’s also a human-centered assessment: you get to engage directly with students and learn more than what you might from a standard quiz. And, these connections need not be long. John Spencer argues they can be five minutes or shorter.
Take a Case Study Approach to AI Ethics
I have been fortunate to get the chance to meet with and hear from students in many of my visits to schools. Students’ views of and experiences with AI are as varied as adults’ (and early research backs up my anecdotal data). Students share with me their concerns about the ethical implications of using AI: its biases, its environmental impact, its use of copyrighted material for training, etc. They ask me whether they will be able to have the careers they want in an AI world. They ask me what they should, and should not, be learning given that AI will be such a powerful knowledge assistant.
Ethical use of AI matters in every academic discipline (in school and beyond). Leon Furze’s excellent Teaching AI Ethics Series breaks down ethical considerations into nine categories and offers case studies relevant to many different disciplines that help teachers and students address each category.
The case study method is a pedagogically sound way to bring real-world issues into the classroom, and it is particularly useful for AI, which presents new and unresolved ethical concerns nearly every day. It engages students and educators in concrete, complex conversations without easy answers. If everyone in a classroom put their devices away and spent time thinking and talking about the ethics of using AI, everyone in the classroom would become better, more critical users of it.
How do we become AI-conscious designers of learning experiences?
Every time I have a conversation about AI, the conversation quickly becomes about something much bigger than AI.
“Should we encourage educators to use AI to write college recommendations or narrative reports?” becomes “Well, what’s the point of college recommendations and narrative reports?”
“Should we encourage students to use AI to write?” becomes “Well, why do we ask students to write?”
“Should we encourage students to use AI for help on homework?” becomes “Well, what do we believe about appropriate academic assistance, whether it's AI, tutors, the internet, parents, or peers?”
These conversations can feel overwhelming, but I find them reassuring because they are not about AI; they are about purpose. And, purpose is a critical component of high-quality design. If AI drives us to become clearer about the purpose of what we do, then we will be better designers of learning experiences, whether we use AI for them or not.
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation. I’m currently planning for the second half of 2024 (June to December). If you want to learn more about my consulting work, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on!
Online Workshops. I’m thrilled to continue my partnership with the California Teacher Development Collaborative (CATDC) with two more online workshops. Join us on March 18 for “Making Sense of AI,” an introduction to AI for educators, and on April 17 for “Leveling Up Our AI Practice,” a workshop for educators with AI experience who are looking to build new skills. Both workshops are open to all, whether or not you live/work in California.
Conferences. I will be facilitating workshops at the Summits for Transformative Learning in Atlanta, GA, USA, March 11-12 (STLinATL) and in St. Louis, MO, USA, May 30-31 (STLinSTL). I’m also a proud member of the board of the Association of Technology Leaders in Independent Schools (ATLIS) and will be attending their annual conference in Reno, NV, USA, April 7-10.
Links!
I’ve been enjoying the “Research Insights” series, in which the author experiments with using AI to understand emerging research on student use of AI.
“Students understand that the rules distinguishing cheating from not cheating in school are like the rules of a game. But in this case it's a game that they did not choose to play... Under these conditions, it's hard to respect the rules.” Peter Gray makes a clear, compelling link between cheating in academia and cheating in school.
Really fun: Nir Zicherman explains how large language models work by comparing them to planning a menu.
I’ve been waiting for an update on the Buxton School’s ban on smartphones, and here’s what the school has learned one year later.
What does it look like to grade 150 essays in two weeks? This piece sheds light on what it means to be an English teacher and what it means to invest in good feedback.