Do schools have an AI collective action problem?
We are sharing knowledge with each other. Should we be creating it together?
In many ways, generative AI is an authentic assessment for schools. It presents a novel challenge with no clear solution, a challenge that has real stakes, a challenge that requires interdisciplinary knowledge and transferable skills to address. Most importantly, taking on this assessment connects us to the world beyond school in meaningful ways.
As Sally Brown and Kay Sambell write in their guide to authentic assessment, “More authentic tasks can also play a key role in developing students not only as individuals but also as active and fulfilled citizens by, for instance, fostering criticality, divergent thinking, agency, motivation to question commonly held assumptions, creativity, and a sense of pride and value in deeply engaging with complex knowledge, or potential contributions to the broader social good.”
If schools treated AI like the authentic assessment that it is, if we designed our work on AI in the way we want teachers to design authentic learning experiences for students, would our approach look different? Would we be engaging more deeply as a network, not just as individual actors?
I have written before about AI and designing authentic assessments for students, but what got me thinking about it this time was an experience I had earlier this week with adults. I partnered with Lakefield College School to design and facilitate a two-day AI Institute for Canadian independent schools. Fifty educators and administrators from 20 different schools participated. It was one of my favorite professional experiences of the last year, and I’ve spent a couple of days thinking about why.
Give people a meaningful job to do.
The Lakefield team had a clear and action-oriented purpose: move the AI work at your school forward by using the institute to refine guidelines for students and employees and design ways to build AI capacity. This is work almost every school I know of is doing, but this was a chance to do it together.
This goal immediately reframes the purpose of the program from knowledge acquisition to knowledge creation, specifically collaborative creation. There is a lot of helpful theory and research on how to do this for both children and adults, but I’ll offer Malcolm Knowles’ four principles of andragogy as one framework:
Adults need to be involved in the planning and evaluation of their instruction.
Experience (including mistakes) provides the basis for the learning activities.
Adults are most interested in learning subjects that have immediate relevance to, and impact on, their job or personal life.
Adult learning is problem-centered rather than content-oriented.
Empower the learner.
This approach also positions the learner as the driver of the learning. Of the 14 hours of programming we designed, I was standing at the front of the room delivering content for less than two of them. The remainder of the program was dedicated to discussion and planning, visioning and prioritization exercises, coaching and feedback, short guest presentations (including Zooming with a robot!), participant-led sessions, and independent work time.
(I’ll share a short clip of the Ameca robot for no other reason than I think you should see it.)
The idea was to elevate the expertise and experience that the people in the room brought to the subject. They all faced a similar challenge, they had all made meaningful progress on their own in addressing that challenge, and so what better solution than to empower them to act collectively to move that work forward?
Generosity is the rule, not the exception.
On the institute intake form, we asked for volunteers to share what their schools have done (not what they are planning to do) in terms of AI policy and capacity-building. Almost every school not only presented at the institute, but also shared their AI documentation as resources for others. As part of the design process, we included gallery walks and presentations of learning that asked participants to share their work, allowing them not only to get feedback from colleagues, but also to inspire those colleagues to think in new ways.
Some of the most positive feedback I heard from participants was about the open sharing of knowledge and resources. Many expressed surprise as they shared this feedback, which I found to be a little bittersweet.
Elevate practical innovation.
A major component of this generosity was humility. Everything schools are doing on generative AI is a work in progress. It can feel vulnerable to share imperfect ideas and to be open about the messiness and challenges of the work, but that is 1) how innovation actually works and 2) what schools want to know from each other. The way participants emphasized the practicality and realism of their work made it more accessible.
I was also really happy to see so many schools share the work of teachers. By a wide margin, the most interesting work I’m seeing in generative AI is occurring at the classroom level. Innovative teachers are clarifying the opportunities and pitfalls of generative AI by experimenting with assessment transformation, by engaging students as partners in exploration, and by embedding AI literacy and tools in their curriculum and instruction. This practical innovation can and should inform institutional decision-making about AI, especially if the school is prioritizing the student experience.
The experience is more than the agenda.
Lakefield is in a gorgeous, lakeside setting. You can’t turn around without seeing someone lounging in an Adirondack chair. There’s a working farm! The AI Institute took place at the same time as summer camps, so there were children and young adults all over campus. The outdoors played a meaningful role in our program: Head of School Anne-Marie Kee invited people on her morning dog walk through the woods, we went to the farm to pick raspberries and learn about the school’s experiential education program, and social activities took place at picnic tables and in backyards.
The setting provided a stark, energizing contrast with generative AI: it reminded us that being outside, being among people, and exploring new places are how we find the energy to address complex issues, especially AI, which can seem opaque and inhuman. I know we don’t all have access to a setting like Lakefield’s, but I was reminded how important it is to aim for learning environments that are more inspiring than a conference room.
How will we deepen our approach to AI in the coming year?
I’ve written before about the power of collective purpose to improve schools, but the experience at Lakefield got me thinking about collective action, or, more specifically, the collective action problem that I often see in independent schools, where schools do the hard work of addressing shared problems as individuals rather than as a network.
I’ve partnered with dozens of schools on AI and had conversations about it with countless others, and the most common question I get is, “What are other schools doing?” There’s a lot going on in this question, including fear of being wrong, an aversion to risk, a desire not to reinvent the wheel, a desire to compare or to compete, and a hope that the hard work of “getting it right” has already been done. This question drives us to spend hours sitting in audiences at conferences, emailing documents around, attending webinars, scrolling social media, reading articles (like this one!), and struggling to sift through enormous amounts of AI discourse trying to find the answer we think we need.
I’d love for us to rephrase this question to “How can I work with other schools on this?” In the coming year, we may need to prioritize collaboration over information, constructing knowledge together rather than trying to find it and use it for individual purposes. The time we have spent on searching could be reallocated to time spent on gathering.
A feeling that has been bothering me for a long time is that the amount of content on generative AI is overwhelming, yet somehow also not enough. The experience at Lakefield helped me understand this feeling better: we keep looking for answers when we might be better equipped to create them ourselves.
Upcoming Ways to Connect with Me
Speaking, Facilitation, and Consultation. If you want to learn more about my work with schools and nonprofits, reach out for a conversation at eric@erichudson.co or take a look at my website. I’d love to hear about what you’re working on.
I’m re-teaming with the amazing Kawai Lai for “Unlocking Your Facilitation Potential,” a CATDC workshop on designing and facilitating effective, human-centered meetings. It will be held in Los Angeles, CA, USA, from August 15-16.
Links!
These five “techno-skeptical questions” about AI are excellent ways to have a digital literacy and ethics conversation with students and colleagues.
Stefan Bauschard did a rigorous review of the utility and security of Brisk, a popular AI tool for teachers.
Great interview with sociologist Sherry Turkle on “artificial intimacy” and how we should think about the idea of relationships with bots.
“LLMs are trained and optimized to excel at following instructions, and assignments are instructions; therefore, LLMs are pretty good at doing assignments.” David Wiley offers a concise explanation of why we can’t really “AI-proof” writing assignments.
“The resistance to AI in education isn’t really about learning.”